Efficient reinforcement learning through Evolutionary Acquisition of Neural Topologies
Authors
Abstract
In this paper we present a novel method, Evolutionary Acquisition of Neural Topologies (EANT), for evolving both the structure and the weights of neural networks. The method introduces an efficient and compact genetic encoding that maps a neural network onto a linear genome which can be evaluated without first being decoded. The method explores new structures only when the structures found so far can no longer be further exploited, which enables it to find minimal neural structures for solving a given learning task. We tested the algorithm on a benchmark control task and found that it performs very well.
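The "evaluate without decoding" property can be pictured as follows: if the genome stores neuron and input genes in a prefix-like order, it can be run directly with a stack, much like evaluating a program in prefix notation. The Python sketch below is only an illustration under that reading, with hypothetical gene classes and a tanh activation; it is not the encoding or code from the paper.

```python
# Illustrative sketch of a linear genome evaluated in place, right to left,
# with a stack. Gene types and activation are assumptions, not the paper's.
import math
from dataclasses import dataclass
from typing import List, Union

@dataclass
class InputGene:           # reads one network input
    source: str            # e.g. "x0"
    weight: float

@dataclass
class NeuronGene:          # combines its 'arity' following subtrees
    arity: int
    weight: float          # weight on this neuron's outgoing connection

Gene = Union[InputGene, NeuronGene]

def evaluate(genome: List[Gene], inputs: dict) -> float:
    """Evaluate the genome directly: no intermediate graph is built,
    which is the point of evaluating the network without decoding it."""
    stack: List[float] = []
    for gene in reversed(genome):
        if isinstance(gene, InputGene):
            stack.append(gene.weight * inputs[gene.source])
        else:  # NeuronGene: pop its arguments, apply tanh, push the result
            total = sum(stack.pop() for _ in range(gene.arity))
            stack.append(gene.weight * math.tanh(total))
    return stack.pop()

# A single hidden neuron fed by two weighted inputs:
genome = [NeuronGene(arity=2, weight=1.0),
          InputGene("x0", 0.5), InputGene("x1", -0.3)]
print(evaluate(genome, {"x0": 1.0, "x1": 2.0}))
```

In this reading, adding a new subnetwork amounts to splicing genes into the list, so the existing structure never has to be re-encoded when new structures are explored.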
Related papers
Automatic Neural Robot Controller Design using Evolutionary Acquisition of Neural Topologies
In this paper we present the automatic design of neural controllers for robots using a method called Evolutionary Acquisition of Neural Topologies (EANT). The method evolves both the structure and the weights of neural networks. It starts with networks of minimal structure determined by the domain expert and increases their complexity along the evolution path. It introduces an efficient and compact...
Self-Organisation of Neural Topologies by Evolutionary Reinforcement Learning
In this article we present EANT, "Evolutionary Acquisition of Neural Topologies", a method that creates neural networks (NNs) by evolutionary reinforcement learning. The structure of the NNs is developed using mutation operators, starting from a minimal structure. Their parameters are optimised using CMA-ES. EANT can create NNs that are very specialised; they achieve a very good performance while b...
Evolutionary reinforcement learning of artificial neural networks
In this article we describe EANT2, Evolutionary Acquisition of Neural Topologies, Version 2, a method that creates neural networks by evolutionary reinforcement learning. The structure of the networks is developed using mutation operators, starting from a minimal structure. Their parameters are optimised using CMA-ES, Covariance Matrix Adaptation Evolution Strategy, a derandomised variant of ev...
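As a rough illustration of the "parameters optimised with CMA-ES" step mentioned here and in the abstracts above, the sketch below uses the open-source cma package on a placeholder fitness function; in EANT/EANT2 the fitness would instead come from running the network, with the candidate weights plugged into its connection genes, on the reinforcement learning task.

```python
# Sketch of weight optimisation with CMA-ES via the 'cma' package
# (pip install cma). Fitness and dimensionality are placeholders.
import cma
import numpy as np

def fitness(weights: np.ndarray) -> float:
    # Placeholder cost (lower is better); a real run would evaluate the
    # fixed-topology network with these weights on the learning task.
    return float(np.sum((weights - 0.5) ** 2))

n_weights = 8                                          # number of connection weights
es = cma.CMAEvolutionStrategy(n_weights * [0.0], 0.5)  # initial mean, step size
while not es.stop():
    candidates = es.ask()                              # sample candidate weight vectors
    es.tell(candidates, [fitness(np.asarray(c)) for c in candidates])
best_weights = es.result.xbest
```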
Analysis of an evolutionary reinforcement learning method in a multiagent domain
Many multiagent problems comprise subtasks that can be considered reinforcement learning (RL) problems. In addition to classical temporal difference methods, evolutionary algorithms are among the most promising approaches for such RL problems. The relative performance of these approaches in certain subdomains (e.g. multiagent learning) of the general RL problem remains an open question at ...
Evolving Neural Networks through Augmenting Topologies
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover...